74 research outputs found

    On a Method for Monitoring the Operability of a Shift Register

    Get PDF
    ΠžΠΏΠΈΡΡ‹Π²Π°Π΅Ρ‚ΡΡ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΎΠ½Π°Π»ΡŒΠ½Π°Ρ схСма устройства контроля работоспособности ΡΠ΄Π²ΠΈΠ³Π°ΡŽΡ‰Π΅Π³ΠΎ рСгистра, основанного Π½Π° ΠΌΠ΅Ρ‚ΠΎΠ΄Π΅ ΡƒΡ‡Π΅Ρ‚Π° Π²Ρ€Π΅ΠΌΠ΅Π½ΠΈ сдвига Π΅Π΄ΠΈΠ½ΠΈΡ†Ρ‹ Ρ‡Π΅Ρ€Π΅Π· рСгистр. Π’ устройствС ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ Π΄Π²Π΅ Π΄Π²ΡƒΡ…Π²Ρ…ΠΎΠ΄ΠΎΠ²Ρ‹Π΅ схСмы совпадСния, линия Π·Π°Π΄Π΅Ρ€ΠΆΠΊΠΈ

    On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition

    Full text link
    In conventional speech recognition, phoneme-based models outperform grapheme-based models for non-phonetic languages such as English. The performance gap between the two typically reduces as the amount of training data is increased. In this work, we examine the impact of the choice of modeling unit for attention-based encoder-decoder models. We conduct experiments on the LibriSpeech 100hr, 460hr, and 960hr tasks, using various target units (phoneme, grapheme, and word-piece); across all tasks, we find that grapheme or word-piece models consistently outperform phoneme-based models, even though they are evaluated without a lexicon or an external language model. We also investigate model complementarity: we find that we can improve WERs by up to 9% relative by rescoring N-best lists generated from a strong word-piece based baseline with either the phoneme or the grapheme model. Rescoring an N-best list generated by the phonemic system, however, provides limited improvements. Further analysis shows that the word-piece-based models produce more diverse N-best hypotheses, and thus lower oracle WERs, than phonemic models. Comment: To appear in the proceedings of INTERSPEECH 201
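    The N-best rescoring mentioned in the abstract can be sketched generically. This is a hedged illustration, not the paper's implementation: each hypothesis keeps its baseline log score, a second (complementary) model adds its own log score, and the two are combined log-linearly with an interpolation weight; the function names, the weight value, and the toy scores are assumptions.

```python
# Sketch: rescore an N-best list from a baseline recognizer with a
# complementary model via log-linear score interpolation.

def rescore_nbest(nbest, second_model_score, weight=0.3):
    """nbest: list of (hypothesis, baseline_log_score) pairs.
    second_model_score: function(hypothesis) -> log score from the
    complementary model (e.g. a phoneme- or grapheme-based model).
    Returns the hypothesis with the best combined score."""
    best_hyp, best_score = None, float("-inf")
    for hyp, base in nbest:
        combined = (1 - weight) * base + weight * second_model_score(hyp)
        if combined > best_score:
            best_hyp, best_score = hyp, combined
    return best_hyp

# Toy example with made-up scores.
nbest = [("the cat sat", -1.0), ("the cats at", -1.1)]
second = {"the cat sat": -0.5, "the cats at": -3.0}
print(rescore_nbest(nbest, second.get))  # the cat sat
```

The oracle WER observation in the abstract matters here: rescoring can only help if the correct hypothesis is somewhere in the N-best list, which is why more diverse lists give the second model more room to improve.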

    No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models

    Full text link
    For decades, context-dependent phonemes have been the dominant sub-word unit for conventional acoustic modeling systems. This status quo has begun to be challenged recently by end-to-end models which seek to combine acoustic, pronunciation, and language model components into a single neural network. Such systems, which typically predict graphemes or words, simplify the recognition process since they remove the need for a separate expert-curated pronunciation lexicon to map from phoneme-based units to words. However, there has been little previous work comparing phoneme-based versus grapheme-based sub-word units in the end-to-end modeling framework, to determine whether the gains from such approaches are primarily due to the new probabilistic model, or from the joint learning of the various components with grapheme-based units. In this work, we conduct detailed experiments aimed at quantifying the value of phoneme-based pronunciation lexica in the context of end-to-end models. We contrast phoneme-based end-to-end models against grapheme-based ones on a large vocabulary English Voice-search task, where we find that graphemes do indeed outperform phonemes. We also compare grapheme-based and phoneme-based approaches on a multi-dialect English task, which once again confirms the superiority of graphemes and greatly simplifies the system for recognizing multiple dialects.
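    The lexicon question the abstract raises can be made concrete with a toy sketch. This is purely illustrative (the lexicon entries are made up, not from a real pronunciation dictionary): a phoneme-based system needs a curated lexicon to map each word to its phoneme sequence and fails on out-of-lexicon words, while a grapheme-based system simply uses the letters of the word as target units.

```python
# Sketch: the two sub-word unit choices contrasted in the abstract.

# Toy pronunciation lexicon (illustrative entries only).
LEXICON = {"speech": ["s", "p", "iy", "ch"], "cat": ["k", "ae", "t"]}

def phoneme_units(word):
    # Requires a lexicon entry; raises KeyError for unseen words --
    # one maintenance cost of the phoneme-based approach.
    return LEXICON[word]

def grapheme_units(word):
    # No lexicon needed: the target units are simply the letters.
    return list(word)

print(phoneme_units("speech"))   # ['s', 'p', 'iy', 'ch']
print(grapheme_units("speech"))  # ['s', 'p', 'e', 'e', 'c', 'h']
```

Dropping the lexicon is what makes the multi-dialect case simpler: one grapheme inventory covers all dialects, whereas phoneme inventories and pronunciations can differ per dialect.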

    A baseline system for the transcription of Catalan broadcast conversation

    No full text
    The paper describes aspects, methods, and results of the development of an automatic transcription system for Catalan broadcast conversation by means of speech recognition. Emphasis is given to the Catalan language, acoustic and language modelling methods, and recognition. Results are discussed in the context of phenomena and challenges in spontaneous speech, in particular regarding phoneme duration and feature-space reduction. Postprint (published version)